A total of 7,426 query results were found; items 101–110 are listed below.
101.
Several estimation techniques assume the validity of Gaussian approximations. Interestingly, such ensemble methods have proven to work very well for high-dimensional data even when the distributions involved are not necessarily Gaussian. We attempt to bridge the gap between this oft-used computational assumption and the theoretical understanding of why it works, by employing some recent results on random projections onto low-dimensional subspaces together with concentration inequalities.
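As a rough illustration of the random-projection results invoked above (the Johnson–Lindenstrauss effect: projecting onto a random low-dimensional subspace approximately preserves pairwise distances), the following sketch uses purely hypothetical dimensions and data; it is not code from the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, d, k = 200, 5000, 50                        # hypothetical: n points in d dims, projected to k dims

X = rng.standard_normal((n, d))                # high-dimensional data (illustrative only)
R = rng.standard_normal((d, k)) / np.sqrt(k)   # Gaussian random projection matrix
Y = X @ R                                      # low-dimensional projections

# Ratios of pairwise squared distances after vs. before projection
# concentrate around 1, which is the concentration phenomenon at play.
ratio = pdist(Y, "sqeuclidean") / pdist(X, "sqeuclidean")
print(ratio.mean(), ratio.std())
```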
102.
This article considers the likelihood ratio (LR) test for structural change from an AR model to a threshold AR model. Under the null hypothesis, the LR test is shown to converge weakly to the maxima of a two-parameter vector Gaussian process. Using the approach of Chan and Tong (1990) and Chan (1991), we obtain a parameter-free limiting distribution when the errors are normal. This distribution is novel, and its percentage points are tabulated via a Monte Carlo method. Simulation studies are carried out to assess the finite-sample performance of the LR test, and a real example is given.
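As a sketch of how percentage points of the maximum of a two-parameter Gaussian process can be tabulated by Monte Carlo, the snippet below simulates a generic process on a grid; the covariance is a placeholder, not the limiting process derived in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grid over the two parameters of the limiting process.
grid = np.array([(u, v) for u in np.linspace(0.1, 0.9, 15)
                        for v in np.linspace(0.1, 0.9, 15)])

def cov(a, b, length=0.3):
    """Placeholder squared-exponential covariance (the article's process differs)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * length ** 2))

K = cov(grid, grid) + 1e-9 * np.eye(len(grid))
L = np.linalg.cholesky(K)

n_rep = 20000
maxima = np.empty(n_rep)
for i in range(n_rep):
    z = L @ rng.standard_normal(len(grid))   # one sample path on the grid
    maxima[i] = z.max()

# Tabulated percentage points of the maximum.
print(np.quantile(maxima, [0.90, 0.95, 0.99]))
```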
103.
In this study, a general methodology has been developed to design the proper bearing in order to eliminate the curvature of the final product in the extrusion process. Three smooth curved (advanced-surface) dies with non-symmetric T-shaped sections and different off-centricities have been studied. For each die, the proper bearing has been designed, and physical and numerical modeling have been performed to validate the design. The design procedure is as follows. A formulation based on Bezier curves is used to determine the exit velocity profile. Since the result of the Bezier method differs from the actual velocity profile, the Chitkara corrective function has been modified and applied to improve the velocity field. A deviation function has been developed to measure the curvature of the final product in terms of the exit velocity field. Considering the obtained velocity field, the friction effects, and the geometry of the dies, the proper bearing for each die has been designed. Finally, numerical and physical modeling have been performed on the die-bearing combination. According to the results, the curvature of the final product was eliminated to a great extent.
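A minimal sketch of evaluating a Bezier curve of the kind that could parameterize an exit-velocity profile; the control points are hypothetical, and the modified Chitkara corrective function and deviation function from the study are not reproduced.

```python
import numpy as np
from math import comb

def bezier(t, control_points):
    """Evaluate a Bezier curve at parameter values t in [0, 1]."""
    P = np.asarray(control_points, dtype=float)
    t = np.asarray(t, dtype=float)
    n = len(P) - 1
    basis = np.stack([comb(n, i) * t ** i * (1 - t) ** (n - i)
                      for i in range(n + 1)], axis=1)   # Bernstein basis
    return basis @ P

# Hypothetical control points for a normalized exit-velocity profile v(s).
ctrl = [(0.0, 1.00), (0.3, 1.08), (0.7, 0.95), (1.0, 1.00)]
s = np.linspace(0.0, 1.0, 50)
profile = bezier(s, ctrl)      # columns: position along the section, velocity
print(profile[:5])
```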
104.
Biological data objects often have both of the following features: (i) they are functions rather than single numbers or vectors, and (ii) they are correlated owing to phylogenetic relationships. In this paper, we give a flexible statistical model for such data, by combining assumptions from phylogenetics with Gaussian processes. We describe its use as a non-parametric Bayesian prior distribution, both for prediction (placing posterior distributions on ancestral functions) and model selection (comparing rates of evolution across a phylogeny, or identifying the most likely phylogenies consistent with the observed data). Our work is integrative, extending the popular phylogenetic Brownian motion and Ornstein–Uhlenbeck models to functional data and Bayesian inference, and extending Gaussian process regression to phylogenies. We provide a brief illustration of the application of our method.
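A minimal sketch of one way such a phylogenetic Gaussian-process prior can be assembled, assuming a separable (Kronecker) combination of a Brownian-motion-on-a-tree covariance between taxa and a squared-exponential covariance over the function's argument; the tree, kernel, and numbers are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical shared-branch-length matrix for three taxa on a clock-like tree
# (Brownian motion on a phylogeny: covariance = shared path length from the root).
C_phylo = np.array([[1.0, 0.6, 0.2],
                    [0.6, 1.0, 0.2],
                    [0.2, 0.2, 1.0]])

# Squared-exponential covariance over the argument of the function-valued trait.
x = np.linspace(0.0, 1.0, 20)
K_func = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)

# Separable prior: Kronecker product of the phylogenetic and functional kernels.
K = np.kron(C_phylo, K_func) + 1e-8 * np.eye(3 * len(x))

rng = np.random.default_rng(2)
draw = rng.multivariate_normal(np.zeros(K.shape[0]), K).reshape(3, len(x))
print(draw.shape)   # one correlated function per taxon
```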
105.
In general, modeling data from blocked and split-plot response surface experiments requires the use of generalized least squares and the estimation of two variance components. The literature on the optimal design of blocked and split-plot response surface experiments, however, focuses entirely on the precise estimation of the fixed factor effects and completely ignores the necessity to estimate the variance components as well. To overcome this problem, we propose a new Bayesian optimal design criterion which focuses on both the variance components and the fixed effects. A novel feature of the criterion is that it incorporates prior information about the variance components through log-normal or beta prior distributions. The resulting designs allow for a more powerful statistical inference than traditional optimal designs. In our algorithm for generating optimal blocked and split-plot designs, we implement efficient quadrature approaches for the numerical approximation of the new optimal design criterion. Supplementary materials for this article are available online.
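A minimal sketch of the kind of quadrature step such a criterion requires: approximating the expectation of a design-quality function over a log-normal prior on a variance ratio with Gauss-Hermite quadrature. The integrand and hyperparameters below are placeholders, not the criterion proposed in the article.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss   # probabilists' Hermite nodes/weights

# Hypothetical log-normal prior on the variance ratio eta = sigma_block^2 / sigma_e^2.
mu, sigma = np.log(1.0), 0.5

def criterion(eta):
    """Placeholder design-quality function of the variance ratio."""
    return 1.0 / (1.0 + eta)

# Gauss-Hermite quadrature with weight exp(-x^2 / 2); the weights sum to sqrt(2*pi).
nodes, weights = hermegauss(20)
eta_nodes = np.exp(mu + sigma * nodes)      # push standard-normal nodes through the log-normal
expected = np.sum(weights * criterion(eta_nodes)) / np.sqrt(2 * np.pi)
print(expected)                             # prior expectation of the placeholder criterion
```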
106.
This paper explores the use of deep belief networks for an authorship verification model applicable to continuous authentication (CA). The proposed approach uses Gaussian units in the visible layer to model real-valued data on the basis of a Gaussian-Bernoulli deep belief network. Lexical, syntactic, and application-specific features are explored, leading to a method for merging a pair of features into a single one. CA is simulated by decomposing an online document into a sequence of short texts over which the CA decisions are made. The experimental evaluation of the proposed method uses block sizes of 140, 280, and 500 characters, on the basis of the Twitter and Enron e-mail corpora. Promising results are obtained, with an equal error rate varying from 8.21% to 16.73%. Using relatively smaller forgery samples, an equal error rate varying from 5.48% to 12.3% is also obtained for different block sizes.
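A minimal sketch of how an equal error rate is read off verification scores by sweeping a decision threshold; the scores below are synthetic placeholders, not outputs of the proposed Gaussian-Bernoulli deep belief network.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep a threshold and return the operating point where FAR and FRR cross."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])  # false accept rate
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])    # false reject rate
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Hypothetical verification scores for illustration only.
rng = np.random.default_rng(3)
genuine = rng.normal(1.0, 0.5, 1000)
impostor = rng.normal(0.0, 0.5, 1000)
print(equal_error_rate(genuine, impostor))
```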
107.
Modeling a response over a nonconvex design region is a common problem in diverse areas such as engineering and geophysics. The tools available to model and design for such responses are limited and have received little attention. We propose a new method for selecting design points over nonconvex regions that is based on the application of multidimensional scaling to the geodesic distance. Optimal designs for prediction are described, with special emphasis on Gaussian process models, followed by a simulation study and an application in glaciology. Supplementary materials for this article are available online.
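A minimal sketch of the core idea, assuming a hypothetical L-shaped region: geodesic distances between candidate points are approximated by shortest paths on a nearest-neighbour graph and then embedded with classical multidimensional scaling, after which standard design criteria can be applied in the embedded space. The region, grid, and neighbour count are illustrative choices, not the article's settings.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

# Candidate points on a hypothetical L-shaped (nonconvex) region.
g = np.linspace(0.0, 1.0, 15)
pts = np.array([(x, y) for x in g for y in g if not (x > 0.5 and y > 0.5)])

# Approximate geodesic distances by shortest paths on a k-nearest-neighbour graph.
tree = cKDTree(pts)
dist, idx = tree.query(pts, k=6)                     # self + 5 nearest neighbours
rows = np.repeat(np.arange(len(pts)), 5)
graph = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                   shape=(len(pts), len(pts)))
D = shortest_path(graph, directed=False)

# Classical multidimensional scaling of the geodesic distance matrix.
n = len(pts)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
embed = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))   # 2-D MDS coordinates
print(embed.shape)   # design points can now be selected in this embedded space
```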
108.
Particle filtering (PF) is a popular nonlinear estimation technique and has been widely used in a variety of applications such as target tracking. Within the PF framework, one critical design choice is the selection of the proposal distribution from which particles are drawn. In this paper, we advocate using as proposal distribution a Gaussian-mixture-based approximation of the posterior probability density function (pdf) after taking into account the most recent measurement. The novelty of our approach is that the parameters of each Gaussian used in the mixture are determined analytically to match the modes of the underlying unknown posterior pdf. As a result, particles are sampled along the most probable regions of the state space, hence reducing the probability of particle depletion. Based on the analytically determined proposal distribution, we introduce a novel PF, termed analytically guided sampling-based PF, which is validated in range-only and bearing-only target tracking.
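A minimal one-dimensional sketch of drawing particles from a Gaussian-mixture proposal and weighting them by prior x likelihood / proposal; the model, mixture modes, and noise levels are hypothetical, and the paper's analytical mode-matching step is not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
N = 500                               # number of particles

# Hypothetical 1-D model: x_k = 0.9 x_{k-1} + w,  z_k = x_k + v.
x_prev = rng.normal(0.0, 1.0, N)      # particles from the previous step
z = 2.0                               # current measurement
q_w, q_v = 0.5, 0.3                   # process / measurement noise std devs

# Gaussian-mixture proposal placed near the measurement-informed modes
# (the paper determines these parameters analytically; here they are illustrative).
means, stds = np.array([1.8, 2.2]), np.array([0.3, 0.3])
comp = rng.integers(0, 2, N)
particles = rng.normal(means[comp], stds[comp])

# Importance weights: transition prior * likelihood / proposal density.
prior = norm.pdf(particles, 0.9 * x_prev, q_w)
lik = norm.pdf(z, particles, q_v)
prop = 0.5 * (norm.pdf(particles, means[0], stds[0]) + norm.pdf(particles, means[1], stds[1]))
w = prior * lik / prop
w /= w.sum()
print((w * particles).sum())          # posterior mean estimate
```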
109.
This paper derives a simple exact expression for the symbol error probability (SEP) of general-order cross-QAM constellations over an additive white Gaussian noise (AWGN) channel. The key idea is to obtain this novel expression from a simple analysis of the corresponding rectangular QAM. The expression involves only simple one-dimensional Gaussian Q-functions, unlike other, more complex SEP expressions. A simple tight-bound approximation of the proposed exact SEP is also given, which improves on existing SEP approximations, particularly at low signal-to-noise ratios (SNRs). Simulation results show that the proposed approximation is in excellent agreement with the exact SEP curve. Moreover, the proposed expressions prove useful for accurate estimation of the SEP in the Nakagami-m fading channel, including the special case of Rayleigh fading (m = 1).
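A minimal sketch of the rectangular-QAM building block referred to above, using the standard textbook SEP of an I x J rectangular constellation written in terms of one-dimensional Q-functions; the exact cross-QAM expression derived in the paper is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def Q(x):
    """Gaussian Q-function, Q(x) = P(N(0, 1) > x)."""
    return norm.sf(x)

def sep_rect_qam(I, J, snr_db):
    """SEP of an I x J rectangular QAM over AWGN (standard textbook expression)."""
    g = 10 ** (snr_db / 10.0)                  # symbol SNR, Es/N0
    a = np.sqrt(6.0 * g / (I ** 2 + J ** 2 - 2))
    p_i = 2 * (1 - 1 / I) * Q(a)               # error probability of the in-phase PAM
    p_q = 2 * (1 - 1 / J) * Q(a)               # error probability of the quadrature PAM
    return 1 - (1 - p_i) * (1 - p_q)

print(sep_rect_qam(4, 4, 15))   # e.g. square 16-QAM at 15 dB
```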
110.
Optical instruments are widely used for precision surface measurement. However, the dynamic range of optical instruments, in terms of measurement area and resolution, is limited by the characteristics of the imaging and detection systems. If a large area with high resolution is required, multiple measurements need to be conducted and the resulting datasets need to be stitched together. Traditional stitching methods use six degrees of freedom for the registration of the overlapped regions, which can result in high computational complexity; moreover, measurement error accumulates as the amount of measurement data increases. In this paper, a stitching method based on a Gaussian process, image registration, and edge intensity data fusion is presented. Firstly, the stitched datasets are modelled using a Gaussian process so as to determine the mean of each stitched tile. Secondly, the datasets are projected onto a base plane; in this way, the three-dimensional datasets are transformed into two-dimensional (2D) images. The images are registered using an (x, y) translation to reduce the complexity. By using a high-precision linear stage that is integral to the measurement instrument, the rotational error becomes insignificant and the cumulative rotational error can be eliminated; the translational error is compensated by the image registration process. The z-direction registration is performed by a least-squares error algorithm, and the (x, y, z) translational information is determined. Finally, the overlapped regions of the measurement datasets are fused together by the edge intensity data fusion method. As a result, a large measurement area with a high resolution is obtained. A simulated and an actual measurement with a coherence scanning interferometer have been conducted to verify the proposed method. The stitching results show that the proposed method is technically feasible for large-area surface measurement.
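A minimal sketch of the translational registration and z-offset steps, using phase correlation for the (x, y) shift and the least-squares (mean-difference) solution for the z offset; the data are synthetic, and the Gaussian-process modelling and edge intensity fusion stages are not reproduced.

```python
import numpy as np

def register_translation(tile_a, tile_b):
    """Estimate the integer (x, y) shift between two height maps via phase correlation."""
    F = np.fft.fft2(tile_a) * np.conj(np.fft.fft2(tile_b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy        # map wrapped indices to signed shifts
    dx = dx - w if dx > w // 2 else dx
    return dx, dy

def z_offset(overlap_a, overlap_b):
    """Least-squares z registration: the offset minimizing sum((a - b - c)^2) is the mean difference."""
    return np.mean(overlap_a - overlap_b)

# Synthetic tiles: tile_b is a laterally shifted copy of tile_a plus a 0.2 z offset.
rng = np.random.default_rng(5)
tile_a = rng.normal(0.0, 1.0, (128, 128))
tile_b = np.roll(tile_a, (-5, -16), axis=(0, 1)) + 0.2

dx, dy = register_translation(tile_a, tile_b)
print(dx, dy)                                  # -> 16 5: the shift that aligns tile_b to tile_a
aligned_b = np.roll(tile_b, (dy, dx), axis=(0, 1))
print(z_offset(tile_a, aligned_b))             # -> -0.2, i.e. tile_b sits 0.2 above tile_a
```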